In our experiments, we took $D = [0,1]\times[0,1]$, $f \equiv 0$, and
$c = d = 10$, and discretized (\ref{bratu}) using the usual second-order
centered differences over a $100\times100$ mesh of equally
spaced points in $D$. In GMRES($m$), we took $m = 10$ and used fast
Poisson right preconditioning as in the experiments in §2. The computing
environment was as described in §2. All computing was done
in double precision.
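For illustration, the following minimal sketch (in Python with NumPy)
indicates how such a centered-difference residual can be set up. It assumes
a Bratu-type equation of the form $\Delta u + cu_x + du_y + e^u = f$ with
homogeneous Dirichlet boundary values and $100\times100$ interior mesh
points; both assumptions are illustrative and may differ in detail from the
precise form of (\ref{bratu}) given earlier.
\begin{verbatim}
import numpy as np

def residual(u, c=10.0, d=10.0, n=100, f=0.0):
    # Discrete residual F(u) for an assumed Bratu-type problem
    # Delta u + c*u_x + d*u_y + e^u = f on D = [0,1]x[0,1],
    # using second-order centered differences on an n x n interior
    # mesh with homogeneous Dirichlet boundary values.
    h = 1.0 / (n + 1)                  # mesh spacing
    U = np.zeros((n + 2, n + 2))       # pad with boundary zeros
    U[1:-1, 1:-1] = u.reshape(n, n)
    Uxx = (U[2:, 1:-1] - 2*U[1:-1, 1:-1] + U[:-2, 1:-1]) / h**2
    Uyy = (U[1:-1, 2:] - 2*U[1:-1, 1:-1] + U[1:-1, :-2]) / h**2
    Ux  = (U[2:, 1:-1] - U[:-2, 1:-1]) / (2*h)  # centered u_x
    Uy  = (U[1:-1, 2:] - U[1:-1, :-2]) / (2*h)  # centered u_y
    F = Uxx + Uyy + c*Ux + d*Uy + np.exp(U[1:-1, 1:-1]) - f
    return F.ravel()
\end{verbatim}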
Figure~\ref{diff}: $\log_{10}$ of the residual norm versus the number of
GMRES($m$) iterations for the finite difference methods.
In the first set of experiments, we allowed each method to
run for 40 GMRES($m$) iterations, starting with zero as the initial
approximate solution, after which the limit of residual norm
reduction had been reached. The results are shown in Fig.~\ref{diff}.
In Fig.~\ref{diff}, the top curve was produced by method FD1.
The second curve from the top is actually a superposition of
the curves produced by methods EHA2 and FD2; the two curves are
visually indistinguishable. Similarly, the third curve from
the top is a superposition of the curves produced by methods EHA4
and FD4, and the fourth curve from the top, which lies barely above
the bottom curve, is a superposition of the curves produced by
methods EHA6 and FD6. The bottom curve was produced by method A.
In the second set of experiments, our purpose was to assess the
relative amount of computational work required by the methods
that use higher-order differencing to reach comparable levels
of residual norm reduction. We compared the pairs of methods EHA2
and FD2, EHA4 and FD4, and EHA6 and FD6 by observing, in each of
20 trials, the number of GMRES($m$) iterations, the number of
$F$-evaluations, and the run time required by each method to reduce
the residual norm by a factor of $\varepsilon$, where for each pair of
methods $\varepsilon$ was chosen to be somewhat greater than the limiting
ratio of final to initial residual norms obtainable by the methods.
In these trials, the initial approximate solutions were obtained by
generating random components as in the similar experiments in §2. We note
that for every method, the numbers of GMRES($m$) iterations and
$F$-evaluations required before termination did not vary at all over the
20 trials. The GMRES($m$) iteration counts, numbers of $F$-evaluations,
and means and standard deviations of the run times are given in
Table~\ref{diffstats}.
Table~\ref{diffstats}: Statistics over 20 trials of GMRES($m$) iteration
numbers, $F$-evaluations, and run times required to reduce the residual
norm by a factor of $\varepsilon$. For each method, the number of GMRES($m$)
iterations and $F$-evaluations was the same in every trial.

\begin{tabular}{lccccc}
Method & $\varepsilon$ & Number of & Number of & Mean Run Time & Standard \\
       &               & Iterations & $F$-Evaluations & (Seconds) & Deviation \\
\hline
EHA2 & $10^{-10}$ & 26 & 32  & 47.12 & .1048 \\
FD2  & $10^{-10}$ & 26 & 58  & 53.79 & .1829 \\
EHA4 & $10^{-12}$ & 30 & 42  & 56.76 & .1855 \\
FD4  & $10^{-12}$ & 30 & 132 & 81.35 & .3730 \\
EHA6 & $10^{-12}$ & 30 & 48  & 58.56 & .1952 \\
FD6  & $10^{-12}$ & 30 & 198 & 100.6 & .3278 \\
\end{tabular}
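A minimal sketch of the termination test used in these trials may clarify
what was measured: each method was advanced one GMRES($m$) iteration at a
time until the residual norm fell below $\varepsilon$ times its initial
value, with the iteration count, $F$-evaluation count, and run time
recorded. The \texttt{step} and \texttt{residual\_norm} callbacks below are
hypothetical stand-ins for one iteration of a method and a from-scratch
residual norm evaluation; this is an illustration, not the code used in
the experiments.
\begin{verbatim}
import time

def run_to_reduction(step, residual_norm, eps, max_iters=200):
    # Advance the method one GMRES(m) iteration at a time until the
    # residual norm is reduced by a factor of eps relative to its
    # initial value; max_iters is only a safety guard.
    r0 = residual_norm()
    iters = fevals = 0
    t0 = time.perf_counter()
    while residual_norm() > eps * r0 and iters < max_iters:
        fevals += step()   # F-evaluations used by this iteration
        iters += 1
    elapsed = time.perf_counter() - t0
    return iters, fevals, elapsed
\end{verbatim}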
In our first set of experiments, we took $c = d = 10$ and used right
preconditioning with a fast Poisson solver from FISHPACK
\cite{Swarztrauber-Sweet}, which is very effective for these
fairly small values of $c$ and $d$. We first started each method
with zero as the initial approximate solution and allowed it
to run for 40 GMRES($m$) iterations, after which the limit of residual
norm reduction had been reached. Figure~\ref{pdep} shows plots
of the logarithm of the Euclidean norm of the residual versus
the number of GMRES($m$) iterations for the three methods. We note
that in Fig.~\ref{pdep} and in all other figures below, the plotted
residual norms were not the values maintained by GMRES($m$), but rather
were computed as accurately as possible ``from scratch.'' That is,
at each GMRES($m$) iteration, the current approximate solution was
formed and its product with the coefficient matrix was subtracted
from the right-hand side, all in double precision.
It was important to compute the residual norms in this way because
the values maintained by GMRES($m$) become increasingly untrustworthy
as the limits of residual norm reduction are neared; see \cite{Walker88}.
It is seen in Fig.~\ref{pdep} that Algorithm EHA achieved
the same ultimate level of residual norm reduction as the FDP
method and required only a few more GMRES($m$) iterations to do
so.
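As a concrete illustration of this ``from scratch'' computation, the
following minimal sketch recomputes the Euclidean residual norm directly
from the current approximate solution, rather than relying on the
recursively updated value maintained by GMRES($m$); it assumes the linear
system is available as a matrix $A$ and right-hand side $b$.
\begin{verbatim}
import numpy as np

def true_residual_norm(A, b, x):
    # Form the current approximate solution's product with the
    # coefficient matrix and subtract it from the right-hand side,
    # all in double precision, then take the Euclidean norm.
    r = b - A @ x
    return np.linalg.norm(r)
\end{verbatim}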
Figure~\ref{pdep}: $\log_{10}$ of the residual norm versus the number of
GMRES($m$) iterations for $c = d = 10$ with fast Poisson
preconditioning. Solid curve: Algorithm EHA; dotted
curve: FDP method; dashed curve: FSP method.
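The right preconditioning used above can be summarized in a minimal
sketch: GMRES($m$) is applied to $AM^{-1}y = b$, and the solution is then
recovered as $x = M^{-1}y$, where $M^{-1}$ is applied by the fast Poisson
solver. In the sketch below, \texttt{poisson\_solve} is a hypothetical
callback standing in for a fast Poisson solver such as the one in
FISHPACK; this illustrates the technique only and is not the
FISHPACK-based code used in the experiments.
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def right_preconditioned_solve(A, b, poisson_solve, m=10):
    # Solve A*M^{-1}*y = b with restarted GMRES(m), where M^{-1}
    # is applied by the (hypothetical) fast Poisson solver, then
    # recover the solution of A*x = b as x = M^{-1}*y.
    n = b.shape[0]
    AMinv = LinearOperator((n, n),
                           matvec=lambda y: A @ poisson_solve(y))
    y, info = gmres(AMinv, b, restart=m)
    return poisson_solve(y)
\end{verbatim}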
In our second set of experiments, we took $c = d = 100$ and carried out
trials analogous to those in the first set above. No preconditioning
was used in these experiments, both because we wanted to compare
the methods without preconditioning and because the fast
Poisson preconditioning used in the first set of experiments is
not cost effective for these large values of $c$ and $d$. We first
allowed each method to run for 600 GMRES($m$) iterations,
starting with zero as the initial approximate solution, after which
the limit of residual norm reduction had been reached.